-
Operational efficiency in cyber-physical systems (CPS) has been significantly improved by the digitalization of industrial control systems (ICS). However, digitalization also exposes ICS to cyber-attacks; of particular concern are attacks that trigger ICS failures. To determine how cyber-attacks can trigger failures, and thereby improve the resiliency posture of CPS, this study presents the Resiliency Graph (RG) framework, which integrates Attack Graphs (AG) and Fault Trees (FT). RG uses AI planning to establish associations between vulnerabilities and system failures, enabling operators to evaluate and manage system resiliency. Our deterministic approach represents both system failures and cyber-attacks as a structured set of prerequisites and outcomes using a novel AI planning language; AI planning then chains together the causes and the consequences. Empirical evaluations on various ICS network configurations validate the framework's effectiveness in capturing how cyber-attacks trigger failures, as well as its scalability.
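The chaining idea behind RG can be illustrated with a minimal sketch: attack steps and fault events are modeled as actions with prerequisite facts and outcome facts, and a forward search links a vulnerability to a failure condition. The actions, fact names, and ICS scenario below are purely hypothetical, not the paper's planning language.

```python
from collections import deque

def chain(initial, actions, failure):
    """Breadth-first forward search that chains attack/fault 'actions'
    (each a (name, preconditions, effects) triple) from the initial
    facts until the failure condition becomes reachable."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, plan = frontier.popleft()
        if failure <= state:
            return plan  # sequence of exploits/faults leading to failure
        for name, pre, eff in actions:
            if pre <= state:
                nxt = frozenset(state | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None  # failure not reachable from these vulnerabilities

# Hypothetical ICS example: facts and action names are illustrative only.
actions = [
    ("exploit_hmi", {"net_access"}, {"hmi_control"}),
    ("spoof_setpoint", {"hmi_control"}, {"bad_setpoint"}),
    ("pump_overrun", {"bad_setpoint"}, {"pump_failure"}),
]
print(chain({"net_access"}, actions, {"pump_failure"}))
# → ['exploit_hmi', 'spoof_setpoint', 'pump_overrun']
```

A returned plan is one cause-to-consequence chain; returning None would indicate that no attack path reaches the modeled failure.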
-
While the development of proactive personal assistants has been a popular topic within AI research, most research in this direction tends to focus on a small subset of possible interaction settings. An important setting that is often overlooked is one where the users may have an incomplete or incorrect understanding of the task. This could lead to the user following incorrect plans with potentially disastrous consequences. Supporting such settings requires agents that are able to detect when the user's actions might be leading them to a possibly undesirable state and, if they are, intervene so the user can correct their course of action. For the former problem, we introduce a novel planning compilation that transforms the task of estimating the likelihood of task failure into a probabilistic goal recognition problem. This allows us to leverage existing goal recognition techniques to detect the likelihood of failure. For the intervention problem, we use model search algorithms to detect the set of minimal model updates that could help users identify valid plans. These identified model updates become the basis for agent intervention. We further extend the proposed approach by developing methods for pre-emptive interventions, to prevent users from performing actions that might result in eventual plan failure. We show how we can identify such intervention points by using an efficient approximation of the true intervention problems, which are best represented as a partially observable Markov decision process (POMDP). To substantiate our claims and demonstrate the applicability of our methodology, we have conducted exhaustive evaluations across a diverse range of planning benchmarks. These tests have consistently shown the robustness and adaptability of our approach, further solidifying its potential utility in real-world applications.
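The goal-recognition step can be sketched in the style of cost-difference goal recognition: a goal's likelihood shrinks with the extra plan cost the observations impose on it. The compilation detail (a dummy "fail" goal whose plans pass through a failure state) and all costs below are illustrative assumptions, not the paper's construction.

```python
import math

def goal_posterior(cost_with_obs, cost_without_obs, prior, beta=1.0):
    """Cost-difference goal recognition: P(O|G) is proportional to
    exp(-beta * (cost(G given O) - cost(G ignoring O))), combined with
    the prior and normalized into a posterior over goals."""
    weights = {
        g: prior[g] * math.exp(-beta * (cost_with_obs[g] - cost_without_obs[g]))
        for g in prior
    }
    z = sum(weights.values())
    return {g: w / z for g, w in weights.items()}

# Hypothetical numbers: the observed actions add no cost to the 'fail'
# goal but force a 4-unit detour for the 'success' goal.
post = goal_posterior(
    cost_with_obs={"fail": 4.0, "success": 9.0},
    cost_without_obs={"fail": 4.0, "success": 5.0},
    prior={"fail": 0.5, "success": 0.5},
)
print(post)  # 'fail' dominates: the observations fit the failure goal best
```

A high posterior on the failure goal is what would trigger the agent to consider intervening.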
-
While the question of misspecified objectives has received much attention in recent years, most works in this area focus primarily on the challenges related to the complexity of the objective specification mechanism (for example, the use of reward functions). However, the complexity of the specification mechanism is just one of many reasons why a user may have misspecified their objective. A foundational cause of misspecification that is overlooked by these works is the inherent asymmetry between human expectations about the agent's behavior and the behavior generated by the agent for the specified objective. To address this, we propose a novel formulation of the objective misspecification problem that builds on the human-aware planning literature, which was originally introduced to support explanation and explicable behavior generation. Additionally, we propose a first-of-its-kind interactive algorithm that is capable of using information generated under incorrect beliefs about the agent to determine the true underlying goal of the user.
-
We describe our methodology for classifying ASL (American Sign Language) gestures. Rather than operate directly on raw images of hand gestures, we extract coordinates and render wireframes from individual images to construct a curated training dataset. This dataset is then used in a classifier that is memory efficient and provides effective performance (94% accuracy). Because we construct wireframes that contain information about several angles in the joints that comprise hands, our methodology is amenable to training those interested in learning ASL by identifying targeted errors in their hand gestures.
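The joint-angle features such a wireframe yields can be sketched as follows: given three 3-D landmarks along a finger, the angle at the middle joint is the angle between the two bone vectors. The landmark coordinates below are made up for illustration; the paper's actual feature extraction pipeline is not shown here.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 3-D landmarks a-b-c,
    the kind of feature a hand wireframe reduces to."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

# A fully extended joint is ~180 degrees; a right-angle bend is ~90.
print(joint_angle((0, 0, 0), (1, 0, 0), (2, 0, 0)))  # → 180.0
print(joint_angle((0, 0, 0), (1, 0, 0), (1, 1, 0)))  # → 90.0
```

Comparing a learner's per-joint angles against a reference gesture is one way such features could localize errors to specific joints.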
-
The adoption of digital technology in industrial control systems (ICS) enables improved control over operations, ease of system diagnostics, and reduced maintenance costs in cyber-physical systems (CPS). However, digital systems expose CPS to cyber-attacks. The problem is grave since these cyber-attacks can lead to cascading failures affecting safety in CPS. Unfortunately, the relationship between safety events and cyber-attacks in ICS, and how cyber-attacks can lead to cascading failures affecting safety, is ill-understood. Consequently, CPS operators are ill-prepared to handle cyber-attacks on their systems. In this work, we envision adopting explainable AI to assist CPS operators in analyzing how a cyber-attack can trigger safety events in CPS and then interactively determining potential approaches to mitigate those threats. We outline the design of a formal framework, based on the notion of transition systems, and the associated toolsets for this purpose. The transition system is represented as an AI planning problem and adopts the causal formalism of human reasoning to assist CPS operators in their analyses. We discuss some of the research challenges that need to be addressed to bring this vision to fruition.
-
Miller, Tim; Hoffman, Robert; Amir, Ofra; Holzinger, Andreas (Eds.) There is a growing interest within the AI research community in developing autonomous systems capable of explaining their behavior to users. However, the problem of computing explanations for users of different levels of expertise has received little research attention. We propose an approach for addressing this problem by representing the user's understanding of the task as an abstraction of the domain model that the planner uses. We present algorithms for generating minimal explanations in cases where this abstract human model is not known. We reduce the problem of generating an explanation to a search over the space of abstract models and show that while the complete problem is NP-hard, a greedy algorithm can provide good approximations of the optimal solution. We empirically show that our approach can efficiently compute explanations for a variety of problems and also perform user studies to test the utility of state abstractions in explanations.more » « less
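The greedy approximation mentioned above can be sketched as a set-cover-style loop: at each step, reveal the model detail that resolves the most of the user's remaining misconceptions ("foils"). The domain, update names, and foil sets below are hypothetical, chosen only to make the greedy behavior visible.

```python
def greedy_explanation(updates, foils):
    """Greedy approximation of a minimal explanation: repeatedly pick
    the model update that resolves the most still-open foils (plans
    the user wrongly believes are valid or better)."""
    open_foils = set(foils)
    explanation = []
    while open_foils:
        best = max(updates, key=lambda u: len(updates[u] & open_foils))
        if not updates[best] & open_foils:
            break  # remaining foils cannot be resolved by any update
        explanation.append(best)
        open_foils -= updates[best]
    return explanation

# Hypothetical domain: each update names a fact of the planner's model
# the user is missing, mapped to the foils it rules out.
updates = {
    "door_locked": {"walk_through_door", "push_door"},
    "key_in_room_b": {"unlock_with_key"},
    "heavy_crate": {"push_door"},
}
print(greedy_explanation(updates, {"walk_through_door", "push_door", "unlock_with_key"}))
# → ['door_locked', 'key_in_room_b']
```

As with greedy set cover, the result is not guaranteed minimal, but it carries the usual logarithmic approximation guarantee, which matches the NP-hardness of the exact problem noted in the abstract.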